What privacy experts are saying about AI: Warnings and opportunities
Posted: September 4, 2024
Talk with privacy experts about Artificial Intelligence (AI) and privacy, and you will get answers ranging from a belief that AI will single-handedly bring down privacy to a conviction that AI will be the savior of privacy in today’s electronic world. The reality is that AI has the potential to be both.
Just to be clear, AI (“systems endowed with the intellectual processes characteristic of human beings, such as the ability to reason, discover meaning, generalize, or learn from past experience”) has been around for a while – arguably since the 1950s or earlier. In fact, the term “artificial intelligence” was coined by John McCarthy in 1955 in a research proposal. The difference between then and now is a matter of degree – of sophistication and application.
Recent advances in AI technology, combined with the proliferation of the data that powers our world, have both accelerated the race to use AI for competitive and public advantage and increased the pressure to pump the AI brakes over privacy and other human issues. For those of us who watched the entirely reasonable but scary AI “HAL” happily murder people in 2001: A Space Odyssey, the 1968 movie that depicted the negative consequences of AI as a caretaker of human life, perhaps a little popular-culture flavor colors our basic paranoia.
But is it paranoia, really? Privacy and technology experts point to real-world, well-founded flaws and pitfalls of AI. These include:
- Privacy
- Security
- Misuse
- Bias and discrimination
- Flawed outcomes
Privacy
AI needs data – sometimes substantial amounts of data – to train. Plus, one of the advantages of using AI is its ability to sift through enormous amounts of data once deployed. Either way, where those large data sets include personal data, the usual privacy concerns of legal basis, consent, transparency, and data minimization are multiplied.
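To make data minimization concrete: before any records reach an AI training pipeline, a team can strip out the fields the model does not actually need. Below is a minimal Python sketch of that idea; the field names and the minimize helper are hypothetical, for illustration only.

```python
# Minimal data-minimization sketch (hypothetical field names).
# Keep an explicit allow-list of the fields the model actually needs,
# so direct identifiers and anything unexpected are dropped by default.
TRAINING_FIELDS = {"age_band", "region", "purchase_category"}

def minimize(record: dict) -> dict:
    """Keep only allow-listed fields; drop identifiers and everything else."""
    return {k: v for k, v in record.items() if k in TRAINING_FIELDS}

raw = {
    "name": "Jane Doe",           # direct identifier: dropped
    "email": "jane@example.com",  # direct identifier: dropped
    "age_band": "30-39",
    "region": "EU",
    "purchase_category": "books",
}

print(minimize(raw))
# {'age_band': '30-39', 'region': 'EU', 'purchase_category': 'books'}
```

An allow-list rather than a block-list is the safer design choice here: any new field that appears in the raw records is excluded from training until someone deliberately adds it.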
Moreover, most data sets were not originally collected with AI training and use in mind. Given the volume and age of the data in question, it would be impossible to re-permission those millions of records for AI purposes. This makes the principles of transparency and consent far tougher to honor. We’ve seen this play out on the world stage with brands like Adobe and Meta, which have taken existing data to train models on an opt-out basis.
Security
As with “big data,” the large pools of data combined for AI purposes bring data breach worries with them.
Also, some AI technologies feed information from their searches back into their results. This means that a user with higher-level access permissions can unintentionally pull protected data into a new data set, giving users with lower-level permissions access to that new, more sensitive data.
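One common mitigation, sketched below in Python, is to enforce the requesting user’s permissions at query time rather than relying on the permissions of whoever collected or indexed the data. The Document class, acl field, and search function are hypothetical names used for illustration, not any particular product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A stored record tagged with the roles allowed to read it."""
    text: str
    acl: set[str] = field(default_factory=set)  # roles permitted to view this document

# Hypothetical shared index, populated by a high-privilege user.
INDEX = [
    Document("Quarterly revenue summary", acl={"analyst", "executive"}),
    Document("Employee salary records", acl={"hr", "executive"}),
]

def search(query: str, user_roles: set[str]) -> list[Document]:
    """Return matching documents, filtered by the *requesting* user's roles.

    Filtering at query time prevents data gathered under one user's broad
    permissions from leaking to users with narrower access.
    """
    matches = [doc for doc in INDEX if query.lower() in doc.text.lower()]
    return [doc for doc in matches if doc.acl & user_roles]

print([d.text for d in search("records", {"hr"})])       # ['Employee salary records']
print([d.text for d in search("records", {"analyst"})])  # [] – salary data filtered out
```

The design point is that access decisions travel with the data into the AI layer, instead of being checked only once at ingestion.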
Misuse
One of the more publicized misuses of AI is to create “deep fakes.” Deep fakes are “believable, realistic videos, pictures, audio, and texts of events which never happened.” Imagine, for example, seeing a video of yourself making a public statement supporting an ideology that runs counter to your own beliefs.
Or imagine, as an actor who makes a living on screen or doing voiceovers for animated characters, seeing your likeness or voice used in a movie without being paid. What about ‘fake news’ video clips that are indistinguishable from real events but are totally made up? This is particularly dangerous in contexts like political campaigns, where AI-created content and misinformation can be spread rapidly to drive agendas. Read our Privacy at the polls report for more insights into how US voters feel about AI in the run-up to the election.
Bias and discrimination
Human decisions are sometimes flawed, and AI-driven outcomes can be, too. Bad data can produce biased AI output, but there are other technological and societal sources of AI bias as well. As the Harvard Business Review reports, “AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.”
This challenge can be difficult to overcome: the opacity of some AI training models can unintentionally hide evidence of bias, and AI-generated bias may align with human bias, making the humans involved less likely to question biased AI output.
Many of the newer regulatory requirements and bills under consideration center on guarding against bias and discrimination, especially in the HR context. The protections include requirements for deep research into bias and discrimination and attestations of fairness, prohibitions against using AI to make important decisions about people without such confirmations of fairness, and the opportunity to opt out of, or object to, an AI-driven decision.
Flawed outcomes
Beyond bias and discrimination, AI outcomes can be flawed in other ways. AI can have trouble pulling out the key factors needed to accurately place something in the right category (wolf versus husky, cat, etc.), may ‘hallucinate’ and perceive erroneous patterns, or may produce nonsensical outcomes.
Do the opportunities of AI outweigh the privacy risk?
The dramatic upside of AI done right, however, shows us an exciting potential future in which technology helps us make faster, better decisions.
It also offers the possibility of enhancing privacy rather than degrading it. Just as humans can deploy AI to help us in medicine, transportation, and business efficiency, the privacy field can find assistance in managing the ever-increasing complexity of consent, third-party, and individual rights management.
Next-generation Privacy-Enhancing Technologies (PETs) are integrating AI in ways that automate privacy work, increasing the availability of real-time information and management across geographies, databases, and practices.
Additionally, AI can improve security measures. With the assistance of AI, companies can provide consumers with more granular, in-context privacy transparency and controls, and can manage cross-jurisdictional consent in meaningful ways.
The key to striking the right balance between harnessing the power of AI for privacy and other complex matters and avoiding privacy, security, bias, and other potential pitfalls is cross-functional work. Technology, security, and privacy experts must collaborate to build privacy into AI tools.
Conversely, these specialists must collaborate to build AI tools that support privacy. Regulators must weigh these competing pressures, adapting laws and enforcement actions to support the privacy framing for AI: Fairness, Accountability, and Transparency, or FAT. Together, we can both have FAT AI and use AI to further privacy goals and activities.
Read our latest guide: Managing consent and privacy in the age of AI
As organizations seek new ways to utilize AI, privacy teams must be prepared to face the challenges head-on without becoming the blocker to innovation…
Read our guide to find out more about:
- The need for robust Consent Management in AI
- Building a scalable Consent Management Platform
- Operationalizing Consent Management in AI projects